There is some issue they have to resolve here. Someone from the Leitwarte will come; I don't know the English term for that.
So how do we do it?
Maybe we can discuss some general things first, because I got some questions last time
and they were actually quite interesting.
So thank you for these two questions; I think they came from data science, maybe they are in the audience.
The first question was actually about PCA: why can you not really represent a test image with the basis which you learned
on the training set?
And the problem is that it's a matter of rank.
Suppose you have 120 images in your training set; then your maximum rank
can also only be 120,
which is far fewer than the number of dimensions your images actually have.
So if you now transform with the basis which you have learned
on these 120 images, you can represent these training images perfectly,
provided you use the full number of principal components.
But it's not possible for a test image which is new.
And this has to do with the low-rank approximation.
If you now had a data set which could span all the dimensions, so all, I don't
know, what was it, 1024 by 1024 parameters, because these are the dimensions of one image,
then this would be possible,
or at least possible in theory.
And this is also the reason why you would get better results if you used bigger data sets for training these eigenfaces.
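To make the rank argument concrete, here is a minimal numpy sketch, not from the lecture; the image size and training-set size are placeholders. It learns a PCA basis from 120 random "images" and shows that a training image is reconstructed essentially perfectly, while a new test image is not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes: 120 training "images", each 64 x 64 = 4096 pixels
# (the lecture's images were 1024 x 1024, i.e. even more dimensions).
n_train, dim = 120, 64 * 64
X_train = rng.standard_normal((n_train, dim))
x_test = rng.standard_normal(dim)

# PCA basis from the training set: at most 120 principal components,
# since rank(X_train) <= min(n_train, dim) = 120.
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)  # Vt: (120, 4096)

def reconstruct(x):
    """Project x onto the learned basis, then map back to pixel space."""
    return mean + Vt.T @ (Vt @ (x - mean))

# A training image is reconstructed essentially perfectly ...
err_train = np.linalg.norm(reconstruct(X_train[0]) - X_train[0])
# ... but a new test image is not: it has components outside the
# subspace spanned by the 120 training images.
err_test = np.linalg.norm(reconstruct(x_test) - x_test)
print(f"train reconstruction error: {err_train:.2e}")
print(f"test reconstruction error:  {err_test:.2e}")
```

The test error is large because a new image generally has components outside the at-most-120-dimensional subspace spanned by the training data; a bigger training set enlarges that subspace, which is exactly the point about bigger data sets above.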
Hi, good morning.
Exactly, somehow the Beamer was not working.
Thank you, great.
And the second question, now I have to think about it.
What was that again?
Oh, right, we had discussed gradient descent, and the question was: why can't I initialize
with zero?
And actually, for these gradient descent examples which we have, you can also initialize
with zero.
So it's actually more a problem, I was already thinking about the deep learning lecture,
where you have to compute the gradients not only with respect to the input but also
with respect to the weights.
And if you now initialize all the weights with zero, then backpropagation does not
work, because all the hidden units receive identical gradients and the symmetry is never broken.
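Here is a minimal sketch of this failure, assuming a tiny two-layer network with tanh activations and a squared loss; all sizes and names are illustrative, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))   # toy batch: 8 samples, 4 features
y = rng.standard_normal((8, 1))   # toy regression targets

# Two-layer network with all weights initialized to zero.
W1 = np.zeros((4, 3))
W2 = np.zeros((3, 1))

for _ in range(100):
    # Forward pass: with W1 = 0, h = tanh(0) = 0 everywhere.
    h = np.tanh(x @ W1)
    pred = h @ W2
    # Backward pass for the mean squared error loss.
    grad_pred = 2.0 * (pred - y) / len(x)
    grad_W2 = h.T @ grad_pred              # zero, because h is zero
    grad_h = grad_pred @ W2.T              # zero, because W2 is zero
    grad_W1 = x.T @ (grad_h * (1 - h**2))  # zero as well
    W1 -= 0.1 * grad_W1
    W2 -= 0.1 * grad_W2

print(np.abs(W1).max(), np.abs(W2).max())  # both 0.0: training never moved
```

With tanh and all-zero weights, the hidden activations are exactly zero, so every gradient is zero and no weight ever changes. By contrast, for a scalar function as in the gradient descent examples, the gradient at zero is generally nonzero, so starting at zero is harmless.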
From the schedule we are a little bit behind, but that's mostly because the summer term
has one lecture less than the winter term.
So I'm not 100% sure which of the lectures I gave last semester I will skip.
So yeah, in the end I have to make a decision then.
Let's see about that.
So first I will basically explain what really has to be in this lecture, and then
about the last lecture I'm not sure yet.
And next week I'm not here.
Either Matthias will again replace me, or Nora; it's not yet clear.
So now I'm 50 minutes behind.